predictive parity
- Banking & Finance > Credit (0.48)
- Health & Medicine (0.47)
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- Asia > China > Chongqing Province > Chongqing (0.04)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- (6 more...)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.68)
Fair Bayes-Optimal Classifiers Under Predictive Parity
Increasing concerns about the disparate effects of AI have motivated a great deal of work on fair machine learning. Existing work mainly focuses on independence- and separation-based measures (e.g., demographic parity, equality of opportunity, equalized odds), while sufficiency-based measures such as predictive parity are much less studied. This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction across protected groups. We prove that, if the overall performance levels of the groups vary only moderately, all fair Bayes-optimal classifiers under predictive parity are group-wise thresholding rules. Perhaps surprisingly, this may not hold if group performance levels vary widely; in that case, we find that predictive parity among protected groups may lead to within-group unfairness. We then propose FairBayes-DPP, an adaptive thresholding algorithm that aims to achieve predictive parity when our condition is satisfied, while also seeking to maximize test accuracy. We provide supporting experiments on synthetic and empirical data.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands (0.04)
- Law (1.00)
- Health & Medicine (1.00)
- Education (0.67)
- Banking & Finance > Credit (0.67)
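As an illustration of the group-wise thresholding result in the abstract above: predictive parity asks that the positive predictive value, P(Y=1 | Ŷ=1, A=a), be equal across groups a. The sketch below is a minimal plug-in version of that idea under assumed calibrated scores; it is our toy illustration, not the authors' FairBayes-DPP, and every name in it (ppv, groupwise_thresholds, the threshold grid) is ours.

```python
# Minimal sketch (NOT the authors' FairBayes-DPP): choose a per-group score
# threshold so each group's positive predictive value (PPV) reaches a common
# target, approximately enforcing predictive parity. Assumes `scores` estimate
# P(Y=1 | X) and `labels` are 0/1 outcomes.
import numpy as np

def ppv(scores, labels, t):
    """PPV = P(Y=1 | Y_hat=1) when we predict positive for scores >= t."""
    pred_pos = scores >= t
    return labels[pred_pos].mean() if pred_pos.any() else np.nan

def groupwise_thresholds(scores, labels, groups, target_ppv):
    """Per-group thresholds that (approximately) equalize PPV at target_ppv.
    Taking the smallest feasible threshold keeps each group's acceptance
    rate, and hence accuracy on true positives, as high as possible."""
    grid = np.linspace(0.01, 0.99, 99)
    thresholds = {}
    for g in np.unique(groups):
        m = groups == g
        feasible = [t for t in grid
                    if ppv(scores[m], labels[m], t) >= target_ppv]
        thresholds[g] = min(feasible) if feasible else 1.0
    return thresholds
```

Consistent with the paper's caveat, when group performance levels differ widely some groups may reach the target PPV only at extreme thresholds (here clamped to 1.0), which is exactly the regime where forcing predictive parity can create within-group unfairness.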
Are Stereotypes Leading LLMs' Zero-Shot Stance Detection?
Dubreuil, Anthony, Gourru, Antoine, Largeron, Christine, Trabelsi, Amine
Large Language Models inherit stereotypes from their pretraining data, leading to biased behavior toward certain social groups in many Natural Language Processing tasks, such as hate speech detection or sentiment analysis. Surprisingly, the evaluation of this kind of bias in stance detection methods has been largely overlooked by the community. Stance detection involves labeling a statement as against, in favor of, or neutral toward a specific target, and it is among the most sensitive NLP tasks, as it often relates to political leanings. In this paper, we focus on the bias of Large Language Models when performing stance detection in a zero-shot setting. We automatically annotate posts in pre-existing stance detection datasets with two attributes, the dialect or vernacular of a specific group and text complexity/readability, to investigate whether these attributes influence the model's stance detection decisions. Our results show that LLMs exhibit significant stereotypes in stance detection tasks, such as incorrectly associating pro-marijuana views with low text complexity and African American dialect with opposition to Donald Trump.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > Canada > Quebec > Estrie Region > Sherbrooke (0.04)
- (3 more...)
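A rough sketch of the kind of probe described in the stance-detection abstract above: bucket posts by an automatically annotated attribute (here, text complexity via Flesch reading ease) and test whether the zero-shot stance labels are independent of it. This is our hypothetical reconstruction, not the paper's pipeline; `stances` is assumed to hold the LLM's labels, and the cutoff of 60 is arbitrary.

```python
# Sketch of an attribute-vs-stance bias probe (not the paper's actual code).
from scipy.stats import chi2_contingency
import textstat  # one possible readability annotator; the paper's may differ

def complexity_bucket(text, cutoff=60.0):
    # Flesch reading ease: higher score = easier text = lower complexity
    return "low" if textstat.flesch_reading_ease(text) >= cutoff else "high"

def attribute_stance_test(posts, stances):
    """Chi-square test of independence between complexity bucket and stance.

    `stances` holds the zero-shot labels ('favor'/'against'/'neutral')
    produced by whatever LLM is under evaluation."""
    buckets = [complexity_bucket(p) for p in posts]
    stance_vals = sorted(set(stances))
    table = [[sum(1 for b, s in zip(buckets, stances) if b == g and s == v)
              for v in stance_vals]
             for g in ("low", "high")]
    chi2, p_value, _, _ = chi2_contingency(table)
    return chi2, p_value  # a small p-value suggests stance depends on complexity
```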
A Disparity Metric Definitions
A.1 Observational Metrics
[Figure 5: Example of step one in the marginalisation, taken from Evans [22].]
In this section we analyse the datasets presented in Le Quy et al. For each bias we provide a justification of our decision. Therefore we drop them from the analysis. Diabetes: For this dataset, the goal is to predict whether a patient will be readmitted within the next 30 days.
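For orientation, the observational criteria such an appendix typically defines fall into three standard families. These are textbook statements for a predictor Ŷ, label Y, and protected attribute A, not quotes from the appendix itself:

```latex
% Independence (demographic parity)
\hat{Y} \perp A:\qquad P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b)

% Separation (equalized odds)
\hat{Y} \perp A \mid Y:\qquad P(\hat{Y}=1 \mid Y=y, A=a) = P(\hat{Y}=1 \mid Y=y, A=b) \quad \forall y

% Sufficiency (predictive parity / within-group calibration)
Y \perp A \mid \hat{Y}:\qquad P(Y=1 \mid \hat{Y}=1, A=a) = P(Y=1 \mid \hat{Y}=1, A=b)
```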
Properties of fairness measures in the context of varying class imbalance and protected group ratios
Brzezinski, Dariusz, Stachowiak, Julia, Stefanowski, Jerzy, Szczech, Izabela, Susmaga, Robert, Aksenyuk, Sofya, Ivashka, Uladzimir, Yasinskyi, Oleksandr
Society is increasingly relying on predictive models in fields like criminal justice, credit risk management, or hiring. To prevent such automated systems from discriminating against people belonging to certain groups, fairness measures have become a crucial component in socially relevant applications of machine learning. However, existing fairness measures have been designed to assess the bias between predictions for protected groups without considering the imbalance in the classes of the target variable. Current research on the potential effect of class imbalance on fairness focuses on practical applications rather than dataset-independent measure properties. In this paper, we study the general properties of fairness measures for changing class and protected group proportions. For this purpose, we analyze the probability mass functions of six of the most popular group fairness measures. We also measure how the probability of achieving perfect fairness changes for varying class imbalance ratios. Moreover, we relate the dataset-independent properties of fairness measures described in this paper to classifier fairness in real-life tasks. Our results show that measures such as Equal Opportunity and Positive Predictive Parity are more sensitive to changes in class imbalance than Accuracy Equality. These findings can help guide researchers and practitioners in choosing the most appropriate fairness measures for their classification problems.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Europe > Poland > Greater Poland Province > Poznań (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (8 more...)
- Banking & Finance > Credit (0.34)
- Information Technology > Security & Privacy (0.34)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Inductive Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.67)
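One way to see the dataset-independent sensitivity the abstract above describes is by simulation: even when a classifier behaves identically in two groups, the observed gap in PPV (a Positive Predictive Parity violation) tends to grow as the positive class becomes rarer. The sketch below is our rough Monte Carlo illustration, not the authors' analysis; all parameter values (n, TPR, FPR, trial counts) are arbitrary.

```python
# Rough Monte Carlo sketch (not the authors' analysis): distribution of the
# PPV gap between two groups that the classifier treats identically, as a
# function of class imbalance.
import numpy as np

rng = np.random.default_rng(0)

def sample_ppv(n, pos_rate, tpr=0.8, fpr=0.2):
    """One group's empirical PPV from a randomly drawn confusion matrix."""
    y = rng.random(n) < pos_rate                     # class imbalance enters here
    y_hat = np.where(y, rng.random(n) < tpr, rng.random(n) < fpr)
    return y[y_hat].mean() if y_hat.any() else np.nan

def mean_ppv_gap(pos_rate, n=200, trials=2000):
    """Average |PPV_a - PPV_b| for two identically treated groups."""
    gaps = [abs(sample_ppv(n, pos_rate) - sample_ppv(n, pos_rate))
            for _ in range(trials)]
    return np.nanmean(gaps)

# The expected gap widens as positives get rarer, with no change in the
# classifier: sampling noise alone makes PPV-based parity harder to attain.
for rate in (0.5, 0.2, 0.05):
    print(f"pos_rate={rate}: mean PPV gap ~ {mean_ppv_gap(rate):.3f}")
```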